'''Krippendorff’s alpha coefficient'''〔Krippendorff, K. (2013) pp. 221–250 describes the mathematics of ''alpha'' and its use in content analysis since 1969.〕 is a statistical measure of the agreement achieved when coding a set of units of analysis in terms of the values of a variable. Since the 1970s, ''alpha'' has been used in content analysis, where textual units are categorized by trained readers; in counseling and survey research, where experts code open-ended interview data into analyzable terms; in psychological testing, where alternative tests of the same phenomena need to be compared; and in observational studies, where unstructured happenings are recorded for subsequent analysis. Krippendorff’s alpha generalizes several known statistics, often called measures of inter-coder agreement, inter-rater reliability, or reliability of coding given sets of units (as distinct from unitizing), but it also distinguishes itself from statistics that are called reliability coefficients yet are unsuitable to the particulars of coding data generated for subsequent analysis.

Krippendorff’s alpha is applicable to any number of coders, each assigning one value to one unit of analysis; to incomplete (missing) data; to any number of values available for coding a variable; to binary, nominal, ordinal, interval, ratio, polar, and circular metrics (levels of measurement); and it adjusts itself to small sample sizes of the reliability data. The virtue of a single coefficient with these variations is that computed reliabilities are comparable across any number of coders, values, metrics, and unequal sample sizes. Software for calculating Krippendorff’s alpha is available.〔Hayes, A. F. & Krippendorff, K. (2007) describe and provide SPSS and SAS macros for computing ''alpha'', its confidence limits, and the probability of failing to reach a chosen minimum.〕〔Manual page of the kripp.alpha() function for the platform-independent statistics package R.〕〔The Alpha resources page.〕〔Matlab code to compute Krippendorff's alpha.〕

==Reliability data==

Reliability data are generated in a situation in which ''m'' ≥ 2 jointly instructed (e.g., by a code book) but independently working coders assign any one of a set of values ''1,...,V'' to a common set of ''N'' units of analysis. In their canonical form, reliability data are tabulated in an ''m''-by-''N'' matrix whose entries ''v''<sub>''ij''</sub> are the values that coder ''c''<sub>''i''</sub> has assigned to unit ''u''<sub>''j''</sub>. Define ''m''<sub>''j''</sub> as the number of values assigned to unit ''u''<sub>''j''</sub> across all coders. When data are incomplete, ''m''<sub>''j''</sub> may be less than ''m''. Reliability data require that values be pairable, i.e., ''m''<sub>''j''</sub> ≥ 2. The total number of pairable values is ''n'' ≤ ''mN''. To help clarify, here is what the canonical form looks like, in the abstract:

{| class="wikitable" style="text-align:center;"
|-
!
! ''u''<sub>1</sub> !! ''u''<sub>2</sub> !! … !! ''u''<sub>''j''</sub> !! … !! ''u''<sub>''N''</sub>
|-
! ''c''<sub>1</sub>
| ''v''<sub>11</sub> || ''v''<sub>12</sub> || … || ''v''<sub>1''j''</sub> || … || ''v''<sub>1''N''</sub>
|-
! ''c''<sub>2</sub>
| ''v''<sub>21</sub> || ''v''<sub>22</sub> || … || ''v''<sub>2''j''</sub> || … || ''v''<sub>2''N''</sub>
|-
! ⋮
| ⋮ || ⋮ || || ⋮ || || ⋮
|-
! ''c''<sub>''m''</sub>
| ''v''<sub>''m''1</sub> || ''v''<sub>''m''2</sub> || … || ''v''<sub>''mj''</sub> || … || ''v''<sub>''mN''</sub>
|-
! ''m''<sub>''j''</sub>
| ''m''<sub>1</sub> || ''m''<sub>2</sub> || … || ''m''<sub>''j''</sub> || … || ''m''<sub>''N''</sub>
|}
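As a minimal sketch of this canonical form (the data values are purely illustrative, and Python is used here only for brevity; this is not one of the SPSS/SAS, R, or Matlab implementations cited above), the following builds a small ''m''-by-''N'' matrix with missing entries, computes each ''m''<sub>''j''</sub>, and counts the pairable values ''n'':

<syntaxhighlight lang="python">
# Canonical reliability data: m coders (rows) by N units (columns).
# None marks a value a coder did not assign (incomplete data).
reliability_data = [
    # u1    u2  u3  u4  u5          <- N = 5 units, illustrative values only
    [1,     2,  3,  3,  None],      # coder c1
    [1,     2,  3,  3,  2],         # coder c2
    [None,  3,  3,  3,  None],      # coder c3
]

m = len(reliability_data)        # number of coders
N = len(reliability_data[0])     # number of units

# m_j: how many coders assigned a value to unit j (may be < m when data are missing)
m_j = [sum(row[j] is not None for row in reliability_data) for j in range(N)]

# Only units with m_j >= 2 contribute pairable values, so n <= m*N.
n = sum(count for count in m_j if count >= 2)

print("m_j per unit:", m_j)                            # [2, 3, 3, 3, 1]
print("pairable values n =", n, "of at most", m * N)   # n = 11 of at most 15
</syntaxhighlight>

In this example the lone value in unit ''u''<sub>5</sub> has no pairing partner (''m''<sub>5</sub> = 1), so it is excluded from the ''n'' pairable values that enter the computation of ''alpha''.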